6D SLAM with Cached kd-tree Search
6D SLAM (Simultaneous Localization and Mapping) or 6D Concurrent
Localization and Mapping of mobile robots considers six degrees of
freedom for the robot pose, namely, the x, y and z coordinates
and the roll, yaw and pitch angles. In previous work we presented our
scan matching based 6D SLAM approach, where scan matching is
based on the well known iterative closest point (ICP) algorithm
[Besl 1992]. Efficient implementations of this algorithm rely on a
fast computation of closest points. The usual approach, i.e., using
kd-trees, is extended in this paper: we describe a novel search
strategy that leads to significant speed-ups. Our mapping system is
real-time capable, i.e., 3D maps are computed using the resources of
the Kurt3D robot hardware.
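The closest-point step that the cached kd-tree search accelerates can be sketched as follows. This is a generic, illustrative ICP iteration (a plain kd-tree nearest-neighbour query followed by the Horn/Kabsch closed-form rigid alignment), not the paper's cached-search implementation; the function name `icp_step` and the use of SciPy's `cKDTree` are assumptions for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: kd-tree closest points, then best rigid fit (SVD)."""
    tree = cKDTree(target)             # kd-tree over the target scan
    _, idx = tree.query(source)        # closest target point for each source point
    matched = target[idx]
    # Closed-form optimal rotation/translation (Horn/Kabsch)
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # proper rotation (det = +1)
    t = mu_t - R @ mu_s
    return R, t

# Toy example: a "scan" of grid points shifted by 5 cm along x
g = np.linspace(0.0, 1.0, 5)
target = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
source = target - np.array([0.05, 0.0, 0.0])
R, t = icp_step(source, target)        # recovers R ~ I, t ~ (0.05, 0, 0)
```

In a full ICP loop this step is iterated until convergence; the kd-tree query dominates the cost, which is why search-strategy optimizations pay off.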
Turning an action formalism into a planner
The paper describes a case study that explores the idea of building a planner with a neat semantics of the plans it produces: choose an action formalism that is "ideal" for the planning application, and build the planner accordingly. In general, and particularly so for the quite expressive action formalism used in this study, this strategy is unlikely to yield fast and efficient planners if the formalism is used naively. Therefore, we adopt the idea that the planner approximates the theoretically ideal plans, where the approximation gets closer the more run time the planner is allowed. As the particular formalism underlying our study allows a significant degree of uncertainty to be modeled and copes with the ramification problem, we end up with a planner that is functionally comparable to modern anytime uncertainty planners, yet is based on a neat formal semantics. To appear in the Journal of Logic and Computation, 1994.
Rmagine: 3D Range Sensor Simulation in Polygonal Maps via Raytracing for Embedded Hardware on Mobile Robots
Sensor simulation has emerged as a promising and powerful technique for
solving many real-world robotic tasks like localization and pose
tracking. However, commonly used simulators have high hardware requirements and
are therefore mostly run on high-end computers. In this paper, we present an
approach to simulate range sensors directly on the embedded hardware of mobile
robots that use triangle meshes as environment maps. This library, called Rmagine,
allows a robot to simulate sensor data for arbitrary range sensors directly on
board via raytracing. Since robots typically have only limited computational
resources, Rmagine aims at being flexible and lightweight, while scaling
well even to large environment maps. It runs on several platforms, from laptops
to embedded computing boards like the Nvidia Jetson, by putting a unified API over
the specific proprietary libraries provided by the hardware manufacturers. This
work is designed to support the future development of robotic applications
depending on simulated range data that could previously not be computed in
reasonable time on mobile systems.
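The core geometric operation such a simulator performs, many times per scan, is casting a ray against the triangles of the environment mesh. Below is an illustrative pure-Python version of the standard Möller–Trumbore ray-triangle test; Rmagine itself dispatches this to hardware-accelerated backends, and the function name and setup here are assumptions for the sketch, not Rmagine's API.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return hit distance t along the ray, or None on a miss (Moller-Trumbore)."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv                  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None      # hits in front of the sensor only

# A simulated range reading: sensor at the origin looking down +x at a wall at x=2
tri = (np.array([2.0, -1.0, -1.0]), np.array([2.0, 1.0, -1.0]), np.array([2.0, 0.0, 1.0]))
rng_hit = ray_triangle(np.zeros(3), np.array([1.0, 0.0, 0.0]), *tri)   # distance 2.0
```

A range sensor simulation repeats this test for every beam against the mesh (with an acceleration structure such as a BVH to avoid testing all triangles), which is exactly the workload that GPU raytracing backends speed up.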
Online Context-based Object Recognition for Mobile Robots
This work proposes a robotic object recognition
system that takes advantage of the contextual information latent
in human-like environments in an online fashion. To fully leverage
context, perceptual information is needed from (at least) a
portion of the scene containing the objects of interest, which may
not be entirely covered by a single sensor observation.
Information from a larger portion of the scenario can still
be considered by progressively registering observations, but this
approach runs into difficulties under some circumstances, e.g.
limited and heavily demanded computational resources, dynamic
environments, etc. Instead, the proposed recognition
system relies on an anchoring process for the fast registration
and propagation of objects' features and locations beyond the
current sensor frustum. In this way, the system builds a graph-based
world model containing the objects in the scenario (both
in the current and previously perceived shots), which is exploited
by a Probabilistic Graphical Model (PGM) in order to leverage
contextual information during recognition. We also propose a
novel way to include the outcome of local object recognition
methods in the PGM, which results in a decrease in the usually
high CRF learning complexity. A demonstration of our proposal
has been conducted employing a dataset captured by a mobile
robot in restaurant-like settings, showing promising results.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
06231 Abstracts Collection -- Towards Affordance-Based Robot Control
From June 5 to June 9, 2006, the Dagstuhl Seminar 06231 ``Towards Affordance-Based Robot Control'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper.
Links to extended abstracts or full papers are provided, if available.
Additionally, papers related to a selection of the above-mentioned presentations will be published in a proceedings volume (Springer LNAI) early in 2007.
MICP-L: Mesh-based ICP for Robot Localization using Hardware-Accelerated Ray Casting
Triangle mesh maps have proven to be a versatile 3D environment
representation for robots to navigate in challenging indoor and outdoor
environments exhibiting tunnels, hills and varying slopes. To make use of these
mesh maps, methods are needed that allow robots to accurately localize
themselves to perform typical tasks like path planning and navigation. We
present Mesh ICP Localization (MICP-L), a novel and computationally efficient
method for registering one or more range sensors to a triangle mesh map to
continuously localize a robot in 6D, even in GPS-denied environments. We
accelerate the computation of ray casting correspondences (RCC) between range
sensors and mesh maps by supporting different parallel computing devices like
multicore CPUs, GPUs and the latest NVIDIA RTX hardware. By additionally
transforming the covariance computation into a reduction operation, we can
optimize the initially guessed poses in parallel on CPUs or GPUs, making our
implementation applicable in real-time on a variety of target architectures. We
demonstrate the robustness of our localization approach with datasets from
agriculture, drones, and automotive domains
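The "covariance computation as a reduction" idea from the abstract can be illustrated as follows: the 3x3 cross-covariance of matched point sets is a sum of outer products, so it can be accumulated chunk-wise and merged, which is what makes it parallelizable across CPU cores or GPU threads. The chunking below stands in for that parallelism; function names are illustrative, not MICP-L's API.

```python
import numpy as np

def partial_stats(src_chunk, tgt_chunk):
    """Per-chunk partial sums: count, point sums, and raw outer-product sum."""
    n = len(src_chunk)
    return n, src_chunk.sum(axis=0), tgt_chunk.sum(axis=0), src_chunk.T @ tgt_chunk

def merge(stats):
    """Reduce partial sums into the centered cross-covariance H."""
    n = sum(s[0] for s in stats)
    sum_s = sum(s[1] for s in stats)
    sum_t = sum(s[2] for s in stats)
    outer = sum(s[3] for s in stats)
    # H = sum_i s_i t_i^T - n * mu_s mu_t^T, with mu = sum / n
    return outer - np.outer(sum_s, sum_t) / n

# Four chunks stand in for four parallel workers
src = np.random.rand(1000, 3)
tgt = np.random.rand(1000, 3)
chunks = [partial_stats(src[i:i + 250], tgt[i:i + 250]) for i in range(0, 1000, 250)]
H = merge(chunks)
```

Because each partial result is just a few scalars and small matrices, the reduction transfers almost no data between workers; the resulting H then feeds the usual SVD-based pose update.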
Salient Visual Features to Help Close the Loop in 6D SLAM
One fundamental problem in mobile robotics research is _Simultaneous Localization and Mapping_ (SLAM): A mobile robot has to localize itself in an unknown environment, and at the same time generate a map of the surrounding area. One fundamental part of SLAM algorithms is loop closing: The robot detects whether it has reached an area that has been visited before, and uses this information to improve the pose estimate in the next step. In this work, visual camera features are used to assist closing the loop in an existing 6 degrees of freedom SLAM (6D SLAM) architecture. For our robotics application we propose and evaluate several detection methods, including salient region detection and maximally stable extremal region detection. The detected regions are encoded using SIFT descriptors and stored in a database. Loops are detected by matching the images' descriptors. A comparison of the different feature detection methods shows that the combination of salient and maximally stable extremal regions suggested by Newman and Ho performs moderately.
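The loop-detection step described above can be sketched generically: each image is represented by a set of SIFT-like descriptors, and a loop candidate is scored by nearest-neighbour matching with Lowe's ratio test. The region detection and SIFT encoding themselves are omitted; `match_count`, the ratio threshold, and the random stand-in descriptors below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def match_count(desc_a, desc_b, ratio=0.8):
    """Count ratio-test descriptor matches from image A to image B."""
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    d_sorted = np.sort(d, axis=1)
    best, second = d_sorted[:, 0], d_sorted[:, 1]
    # Lowe's ratio test: accept only clearly unambiguous nearest neighbours
    return int(np.sum(best < ratio * second))

rng = np.random.default_rng(0)
db = rng.normal(size=(50, 128))                            # stored image descriptors
query = db[:30] + rng.normal(scale=0.01, size=(30, 128))   # revisit: noisy copies
unrelated = rng.normal(size=(30, 128))                     # a different place
# A revisited view matches far more descriptors than an unrelated one,
# which is the signal used to declare a loop closure.
```

In practice the match count (or a normalized score) is thresholded to trigger the loop-closure constraint in the 6D SLAM back end.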